
Artificial Intelligence: A Double-Edged Sword for Cybersecurity in Argentina

In Argentina, AI is simultaneously a tool for cybercriminals and the foundation of defense for companies and governments. The rise of fraud using deepfakes and deep learning poses new challenges for digital security.


The democratization of artificial intelligence makes it easier than ever to impersonate someone. While cybercriminals use it to design more effective and less detectable attacks, governments and companies use it to build dynamic barriers that respond in real time. “Identity verification processes are now compromised by existing market risks,” warns Iñigo Castillo, General Manager for Latin America at Incode. In Argentina, UFECI's crime statistics already show that 80% of cybercrimes correspond to online fraud and identity theft. Within this triangle of criminals, companies, and governments, AI has become an indispensable ally, from machine learning that automates the detection of fraud patterns to algorithms that model attack scenarios. The conclusion is clear: artificial intelligence is no longer just a tool of the future, but a central variable in the present of digital security.

Buenos Aires, November 22, 2025 – Total News Agency-TNA – Artificial intelligence (AI) has consolidated itself as a double-edged tool in the global digital landscape: on one hand, it enables more complex frauds operated by cybercriminals; on the other, it constitutes the main line of defense that governments and companies deploy to protect transactions, identities, and critical networks. The winners of this new battle will be those who understand that AI and cybersecurity are not distinct concepts, but the two sides of the same coin.

“Artificial intelligence is… our mind, our reasoning,” adds Pablo Poza, an associate at Together Business Consulting. Identity verification technologies based on deep learning now try to distinguish what is real from what is fake: “Today anyone can create a deepfake, and even one of medium quality can deceive the human senses,” states Castillo.

In Argentina, the numbers reflect a world in full transformation: according to a report from the biometrics platform Biocatch, 262 cyberattacks were recorded in the first three quarters of the year, a 63% increase over the same period of the previous year across Latin America. Digital fraud has mutated faster than traditional security systems can adapt: 61% of organizations in Argentina experienced security breaches in the last year, while 29% reported direct economic losses from digital attacks. In fact, the local cybercrime authority indicated that 47% of security professionals detected the use of AI in attempted attacks, while 92% estimate that this technology will be key to corporate defense in the coming years.

The experts' testimonies highlight the dual nature of AI. At the macro level, developing AI systems for monitoring, anomaly detection, and automated response has become a priority to secure financial systems, supply chains, and digital government. The scenario poses a further challenge: the speed of technological change has rendered obsolete many security protocols designed for conventional risks. At the user level, 76% of those surveyed admitted to having been the victim of some form of online scam.
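To illustrate the kind of anomaly detection the experts refer to, the following minimal sketch trains an unsupervised model on simulated transaction history and flags deviations. The feature names, numbers, and thresholds are invented for the example; they do not describe any system mentioned in this article.

```python
# Minimal illustration of ML-based anomaly detection on transactions.
# All feature names and figures are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" history: [amount_usd, hour_of_day, account_age_days]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=5000),   # typical purchase amounts
    rng.normal(loc=15, scale=4, size=5000) % 24,     # mostly daytime activity
    rng.uniform(30, 2000, size=5000),                # established accounts
])

# A few suspicious transactions: large amounts, odd hours, brand-new accounts
suspicious = np.array([
    [4800.0, 3.0, 1.0],
    [3500.0, 4.5, 2.0],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies, 1 for inliers
for tx, label in zip(suspicious, model.predict(suspicious)):
    status = "FLAG FOR REVIEW" if label == -1 else "ok"
    print(f"amount={tx[0]:8.2f} hour={tx[1]:4.1f} age_days={tx[2]:6.1f} -> {status}")
```

In a real deployment the flagged transactions would feed the "automated response" layer the article mentions; here the sketch simply prints a review flag.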

In the midst of this transformation, AI acts as a catalyst for crime. In this context, digital resilience becomes imperative: “Financial sites need to adopt a comprehensive strategy based on digital resilience, a vision that integrates real-time monitoring, automation, and intelligent data analysis throughout the ecosystem,” emphasizes a study by OCP Tech.

For companies, the implications go beyond the technical. Preventive capabilities, however, run up against a reality in which the sophistication of attacks outpaces the defensive infrastructure. One of the pillars of the fraudulent offensive is false identity: if a payment platform is attacked, would you deposit money there again?

Multimodal AI, which analyzes everything from camera depth to voice patterns, has become essential to ensure that a remote person and their document are not being spoofed by an algorithm. Governments have also stepped onto this new frontier and activated intensive defensive strategies. “At the same time, it generates solutions so that all those actors can protect themselves against a possible onslaught,” affirms Castillo.
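A rough sketch of how such multimodal checks can be fused into a single decision appears below. The signal names, weights, and thresholds are assumptions made for the example and are not taken from Incode or any vendor cited here.

```python
# Hypothetical fusion of multimodal identity-verification signals.
# Upstream models are assumed to produce scores normalized to [0, 1].
from dataclasses import dataclass

@dataclass
class VerificationScores:
    face_liveness: float       # e.g. depth/texture analysis of the selfie
    document_match: float      # e.g. ID photo vs. selfie similarity + tamper checks
    voice_authenticity: float  # e.g. synthetic-voice (deepfake) detector

# Illustrative weights and threshold; a real system would calibrate these.
WEIGHTS = {"face_liveness": 0.4, "document_match": 0.35, "voice_authenticity": 0.25}
APPROVE_THRESHOLD = 0.80

def decide(scores: VerificationScores) -> str:
    combined = (
        WEIGHTS["face_liveness"] * scores.face_liveness
        + WEIGHTS["document_match"] * scores.document_match
        + WEIGHTS["voice_authenticity"] * scores.voice_authenticity
    )
    # Any single very weak signal (a likely deepfake) vetoes automatic approval.
    if min(scores.face_liveness, scores.document_match, scores.voice_authenticity) < 0.3:
        return "reject"
    return "approve" if combined >= APPROVE_THRESHOLD else "manual_review"

print(decide(VerificationScores(0.95, 0.90, 0.88)))  # approve
print(decide(VerificationScores(0.92, 0.85, 0.15)))  # reject: voice looks synthetic
```

The veto on any single weak modality reflects the idea that a convincing face should not compensate for a voice that the detector considers synthetic.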

Poza outlines three fronts of post-attack damage: “The first is reputation. The second is IT: tracing where they got in and what they did. The third is repair: what was lost and how to recover it.” The consultancy Gartner predicts that within a few years more than 25% of all consumer interactions will be managed entirely by autonomous agents. “It doesn't need human error to be on the table. It anticipates something. Very possibly, no,” warns Poza.

The intensive use of mobile devices, the proliferation of e-commerce, and the expansion of payments across multiple channels have created fertile ground for new forms of extortion, device takeover, fake accounts built with deepfakes, and threats to critical infrastructure. “Deepfakes” of faces and synthetic voices have driven a 300% increase in fraud in the opening of bank accounts on digital platforms.